Reservoir Computing (RC), typified by the Liquid State Machine (LSM) and the Echo State Network (ESN), has been devised as an information processing structure that imitates the cerebellum for motor control. RC is composed of three layers: an input layer, a reservoir layer, which is a kind of recurrent neural network with internal state, and an output layer. The most important feature of RC is that the only weights to be learned are those of the readout connecting the reservoir layer to the output layer. In other words, the reservoir layer retains the inherent dynamics of the network fixed at design time. The reservoir layer corresponds to the granule layer of the cerebellum because its number of neurons can easily be increased.
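The structure described above can be sketched as a minimal ESN in NumPy. This is an illustrative example, not the specific networks used in our studies: the dimensions, input scaling, spectral radius, the toy delay task, and the ridge-regression readout are all assumptions chosen for the sketch. Note that only `W_out` is trained; `W_in` and the recurrent matrix `W` stay fixed at their random initial values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the text).
n_in, n_res, n_out = 1, 100, 1

# Fixed random input and reservoir weights; these are never trained.
W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
# Scale the spectral radius below 1 so the reservoir has fading memory.
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Toy memory task (an assumption for illustration): reproduce the
# input signal delayed by 5 steps.
t = np.linspace(0, 20 * np.pi, 2000)
u_seq = np.sin(t)[:, None]
y_seq = np.concatenate([np.zeros((5, 1)), u_seq[:-5]])

X = run_reservoir(u_seq)

# Train the readout by ridge regression: the only learned weights in RC.
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y_seq).T

pred = X @ W_out.T
```

Because training reduces to one linear regression over collected reservoir states, learning is fast and stable compared with backpropagation through the recurrent weights.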
Using this RC framework, we are tackling the following topics:
- Design of the network dynamics to mitigate catastrophic forgetting
- Design of the neuron dynamics for regression and classification of time-series data
In particular, “catastrophic forgetting”, in which the weights for already learned tasks are completely overwritten when other tasks are learned sequentially, must be mitigated to develop truly autonomous robots.
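A minimal sketch of why sequential readout training causes this problem, under illustrative assumptions (random features standing in for reservoir states, two synthetic linear tasks): refitting the same readout weights on a second task discards the solution for the first one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random features standing in for reservoir states (an assumption).
X_a = rng.normal(size=(200, 20))   # states while performing task A
X_b = rng.normal(size=(200, 20))   # states while performing task B
w_true_a = rng.normal(size=20)     # hypothetical target readouts
w_true_b = rng.normal(size=20)
y_a = X_a @ w_true_a
y_b = X_b @ w_true_b

def fit_readout(X, y):
    """Least-squares readout fit: training replaces the weights outright."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

w = fit_readout(X_a, y_a)                      # learn task A first
err_a_before = np.mean((X_a @ w - y_a) ** 2)   # near zero

w = fit_readout(X_b, y_b)                      # then learn task B sequentially
err_a_after = np.mean((X_a @ w - y_a) ** 2)    # task A performance collapses
```

The jump from `err_a_before` to `err_a_after` is the forgetting: nothing in plain least-squares retraining preserves the earlier solution, which is why the network dynamics themselves must be designed to mitigate it.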